
    The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding vs. Dynamic Range

    Compressive sensing (CS) exploits the sparsity present in many signals to reduce the number of measurements needed for digital acquisition. With this reduction would come, in theory, commensurate reductions in the size, weight, power consumption, and/or monetary cost of both signal sensors and any associated communication links. This paper examines the use of CS in the design of a wideband radio receiver in a noisy environment. We formulate the problem statement for such a receiver and establish a reasonable set of requirements that a receiver should meet to be practically useful. We then evaluate the performance of a CS-based receiver in two ways: via a theoretical analysis of its expected performance, with a particular emphasis on noise and dynamic range, and via simulations that compare the CS receiver against the performance expected from a conventional implementation. On the one hand, we show that CS-based systems that aim to reduce the number of acquired measurements are somewhat sensitive to signal noise, exhibiting a 3 dB SNR loss per octave of subsampling, which parallels the classic noise-folding phenomenon. On the other hand, we demonstrate that since they sample at a lower rate, CS-based systems can potentially attain a significantly larger dynamic range. Hence, we conclude that while a CS-based system has inherent limitations that do impose some restrictions on its potential applications, it also has attributes that make it highly desirable in a number of important practical settings.
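
    The 3 dB-per-octave penalty is easy to reproduce numerically. The sketch below is a minimal, hypothetical illustration (the parameter choices and the support-aware least-squares decoder are assumptions, not the paper's receiver): it measures a noisy sparse signal with a random matrix and reports the recovery error as the subsampling factor doubles.

        # Empirical check of noise folding: each halving of M should roughly
        # double the oracle recovery MSE, i.e. a 3 dB SNR loss per octave.
        import numpy as np

        rng = np.random.default_rng(0)
        N, k, sigma, trials = 1024, 5, 0.01, 200

        for M in (512, 256, 128, 64):  # one extra octave of subsampling per step
            mse = 0.0
            for _ in range(trials):
                support = rng.choice(N, k, replace=False)
                x = np.zeros(N)
                x[support] = rng.standard_normal(k)
                noise = sigma * rng.standard_normal(N)  # noise enters before measurement
                Phi = rng.standard_normal((M, N)) / np.sqrt(M)
                y = Phi @ (x + noise)
                # Support-aware least squares isolates the noise-folding effect.
                x_hat = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
                mse += np.sum((x_hat - x[support]) ** 2) / trials
            print(f"M={M:4d}  MSE={mse:.2e}  expected ~{k * sigma**2 * N / M:.2e}")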

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $m \geq \mathrm{Const} \cdot \mu^2(U) \cdot S \cdot \log n$, where $S$ is the number of nonzero components in $x^0$ and $\mu(U)$ is the largest entry in $U$ properly normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu(U)$, the fewer samples are needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the sign of $x^0$ at each nonzero entry and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
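
    The incoherence parameter is directly computable. A minimal sketch, assuming the standard basis as the sparsity domain: the unitary DFT attains the best case $\mu(U) = 1$, while the identity attains the worst case $\mu(U) = \sqrt{n}$, and the implied sample budget scales with $\mu^2(U)$ (the constant is left unspecified, as in the bound).

        # mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for two extreme orthobases.
        import numpy as np

        n, S = 256, 8
        F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT: mu = 1 (best case)
        I = np.eye(n)                            # identity: mu = sqrt(n) (worst case)

        for name, U in (("DFT", F), ("identity", I)):
            mu = np.sqrt(n) * np.abs(U).max()
            budget = mu**2 * S * np.log(n)       # m >= Const * mu^2 * S * log n
            print(f"{name:8s} mu(U) = {mu:5.2f}  ->  m >= Const * {budget:.0f}")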

    Measurements design and phenomena discrimination

    The construction of measurements suitable for discriminating signal components produced by phenomena of different types is considered. The required measurements should be capable of cancelling out those signal components which are to be ignored when focusing on a phenomenon of interest. Under the hypothesis that the subspaces hosting the signal components produced by each phenomenon are complementary, their discrimination is accomplished by measurements giving rise to the appropriate oblique projector operator. The subspace onto which the operator should project is selected by nonlinear techniques in line with adaptive pursuit strategies.
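
    A minimal numerical sketch of the central object, assuming the two phenomena span complementary subspaces with known basis matrices A and B (names and dimensions are illustrative, and the paper's adaptive subspace selection is not shown): the oblique projector onto range(A) along range(B) passes the component of interest and cancels the other.

        # Oblique projector onto range(A) along range(B), assuming the two
        # ranges are complementary (together they span R^n).
        import numpy as np

        rng = np.random.default_rng(1)
        n, ka, kb = 6, 2, 4
        A = rng.standard_normal((n, ka))   # subspace of the phenomenon of interest
        B = rng.standard_normal((n, kb))   # subspace to be cancelled

        M = np.hstack([A, B])              # invertible by complementarity
        P = A @ np.linalg.inv(M)[:ka, :]   # x = A a + B b  ->  P x = A a

        a, b = rng.standard_normal(ka), rng.standard_normal(kb)
        x = A @ a + B @ b
        print(np.allclose(P @ x, A @ a))   # True: interference is cancelled
        print(np.allclose(P @ P, P))       # True: P is an (oblique) projector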

    Perceptual Compressive Sensing

    Compressive sensing (CS) works to acquire measurements at a sub-Nyquist rate and recover the scene images. Existing CS methods recover scene images at the pixel level, which leads to overly smooth reconstructions that lack structural information, especially at low measurement rates. To overcome this drawback, in this paper we propose perceptual CS to obtain high-level structured recovery. Our task no longer focuses on pixel-level fidelity; instead, we aim for a better visual result. In detail, we employ a perceptual loss, defined at the feature level, to enhance the structural information of the recovered images. Experiments show that our method achieves better visual results with stronger structural information than existing CS methods at the same measurement rate. Comment: Accepted by The First Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2018). This is a pre-print version (not the final version).
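
    A common way to realize a feature-level loss is to compare recovered and ground-truth images in the feature space of a pretrained network. The sketch below is one plausible instantiation using a frozen VGG-16 in PyTorch; the paper's exact loss network and layer choice are assumptions here.

        # Perceptual loss sketch: MSE between deep features rather than pixels.
        import torch.nn as nn
        from torchvision.models import vgg16

        class PerceptualLoss(nn.Module):
            def __init__(self, cutoff=16):  # layer cutoff is an assumption
                super().__init__()
                self.features = vgg16(pretrained=True).features[:cutoff].eval()
                for p in self.features.parameters():
                    p.requires_grad = False  # the loss network stays frozen

            def forward(self, recovered, target):
                # Feature-space MSE rewards structure over per-pixel agreement.
                return nn.functional.mse_loss(self.features(recovered),
                                              self.features(target))

        # usage: loss = PerceptualLoss()(x_hat, x_true)  # (B, 3, H, W) tensors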

    Increased brain white matter axial diffusivity associated with fatigue, pain and hyperalgesia in Gulf War illness

    Background: Gulf War exposures in 1990 and 1991 have caused 25% to 30% of deployed personnel to develop a syndrome of chronic fatigue, pain, hyperalgesia, and cognitive and affective dysfunction. Methods: Gulf War veterans (n = 31) and sedentary veteran and civilian controls (n = 20) completed fMRI scans for diffusion tensor imaging. Dolorimetry and subjective reports of pain and fatigue were correlated with white matter diffusivity properties to identify tracts associated with symptom constructs. Results: Gulf War Illness subjects had significantly correlated fatigue, pain, hyperalgesia, and increased axial diffusivity in the right inferior fronto-occipital fasciculus. ROC-generated thresholds and a subsequent binary regression analysis predicted chronic multisymptom illness (CMI) classification based upon axial diffusivity in the right inferior fronto-occipital fasciculus. These correlates were absent for controls in the dichotomous regression analysis. Conclusion: The right inferior fronto-occipital fasciculus may be a potential biomarker for Gulf War Illness. This tract links cortical regions involved in fatigue, pain, emotional and reward processing, and the right ventral attention network in cognition. The axonal neuropathological mechanism(s) explaining increased axial diffusivity may account for the most prominent symptoms of Gulf War Illness.
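
    The classification step is a standard ROC-threshold-plus-logistic-regression pipeline; the sketch below illustrates it on synthetic data (the diffusivity values and group means are invented, not the study's measurements; only the group sizes follow the abstract).

        # ROC threshold (Youden's J), then binary logistic regression.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(2)
        ad = np.concatenate([rng.normal(1.10, 0.05, 31),   # patients (invented)
                             rng.normal(1.00, 0.05, 20)])  # controls (invented)
        label = np.concatenate([np.ones(31), np.zeros(20)])

        fpr, tpr, thr = roc_curve(label, ad)
        cut = thr[np.argmax(tpr - fpr)]              # Youden's J picks the cut
        binary = (ad >= cut).astype(int).reshape(-1, 1)

        model = LogisticRegression().fit(binary, label)
        print(f"threshold={cut:.3f}  accuracy={model.score(binary, label):.2f}")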

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, allowing one to increase the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures.
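
    As a concrete bridge between the discrete and convex views, the convex relaxation of the simplest group-sparse model is the group-lasso norm, whose proximal operator zeroes or shrinks each group as a block. The sketch below is a generic implementation for non-overlapping groups (an illustration, not code from the chapter).

        # Proximal operator of lambda * sum_g ||x_g||_2 (block soft-thresholding).
        import numpy as np

        def prox_group_lasso(x, groups, lam):
            out = np.zeros_like(x)
            for g in groups:
                norm = np.linalg.norm(x[g])
                if norm > lam:                          # group survives, shrunk...
                    out[g] = (1.0 - lam / norm) * x[g]
            return out                                  # ...otherwise zeroed as a block

        x = np.array([3.0, 4.0, 0.1, -0.1, 2.0, 0.0])
        groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
        print(prox_group_lasso(x, groups, lam=1.0))
        # [2.4, 3.2, 0, 0, 1.0, 0]: the small middle group is eliminated entirely.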

    Rhinorrhea, cough and fatigue in patients taking sitagliptin

    Sitagliptin is a dipeptidyl peptidase-4 (DPP IV, CD26) inhibitor indicated for the treatment of Type II diabetes as a second-line therapy after metformin. We report fifteen sitagliptin-intolerant patients who developed anterior and posterior rhinorrhea, cough, dyspnea, and fatigue. Symptoms typically developed within 1 to 8 weeks of starting, and resolved within 1 week of stopping, the drug. Peak expiratory flow rates increased 34% in 8 patients who stopped sitagliptin. Similar changes were found in 4 out of 5 persons who had confirmatory readministration. Chart review identified 17 patients who tolerated sitagliptin and had no symptomatic changes. The sitagliptin-intolerant group had higher rates of clinically diagnosed allergic rhinitis (15/15 vs. 6/18; p = 0.00005, Fisher's Exact test) and angiotensin-converting enzyme inhibitor-induced cough (6/13 vs. 1/18; p = 0.012). Nasal and inhaled glucocorticoids may control the underlying allergic inflammation and abrogate this new sitagliptin-induced pharmacological syndrome. Potential mucosal and central nervous system mechanisms include disruption of neuropeptides and/or cytokines that rely on DPP IV for activation or inactivation, and T cell dysfunction.
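
    The headline p-value can be checked directly from the reported counts; the sketch below runs the Fisher's exact test on the 2x2 table implied by 15/15 vs. 6/18 (the table layout is inferred from those fractions).

        # Fisher's exact test for the allergic-rhinitis contrast.
        from scipy.stats import fisher_exact

        #        rhinitis  none
        table = [[15,  0],       # sitagliptin intolerant
                 [ 6, 12]]       # sitagliptin tolerant
        _, p = fisher_exact(table)
        print(f"p = {p:.6f}")    # ~0.00005, matching the reported value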

    On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation

    We study classic streaming and sparse recovery problems using deterministic linear sketches, including $\ell_1/\ell_1$ and $\ell_\infty/\ell_1$ sparse recovery problems (the latter also being known as $\ell_1$-heavy hitters), norm estimation, and approximate inner product. We focus on devising a fixed matrix $A \in \mathbb{R}^{m \times n}$ and a deterministic recovery/estimation procedure which work for all possible input vectors simultaneously. Our results improve upon existing work, the following being our main contributions:

    * A proof that $\ell_\infty/\ell_1$ sparse recovery and inner product estimation are equivalent, and that incoherent matrices can be used to solve both problems. Our upper bound for the number of measurements is $m = O(\varepsilon^{-2} \min\{\log n, (\log n / \log(1/\varepsilon))^2\})$. We can also obtain fast sketching and recovery algorithms by making use of the Fast Johnson-Lindenstrauss transform. Both our running times and number of measurements improve upon previous work. We can also obtain better error guarantees than previous work in terms of a smaller tail of the input vector.

    * A new lower bound for the number of linear measurements required to solve $\ell_1/\ell_1$ sparse recovery. We show $\Omega(k/\varepsilon^2 + (k/\varepsilon)\log(n/k))$ measurements are required to recover an $x'$ with $\|x - x'\|_1 \leq (1+\varepsilon)\|x_{\mathrm{tail}(k)}\|_1$, where $x_{\mathrm{tail}(k)}$ is $x$ projected onto all but its largest $k$ coordinates in magnitude.

    * A tight bound of $m = \Theta(\varepsilon^{-2} \log(\varepsilon^2 n))$ on the number of measurements required to solve deterministic norm estimation, i.e., to recover $\|x\|_2 \pm \varepsilon \|x\|_1$.

    For all the problems we study, tight bounds are already known for the randomized complexity from previous work, except in the case of $\ell_1/\ell_1$ sparse recovery, where a nearly tight bound is known. Our work thus aims to study the deterministic complexities of these problems.
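
    The first contribution rests on a simple mechanism: if A has unit-norm columns whose pairwise inner products are at most $\varepsilon$ in magnitude, then $\langle a_i, Ax \rangle$ estimates $x_i$ to within $\varepsilon \|x\|_1$ for every coordinate at once. The sketch below checks this with a random sign matrix, which is merely incoherent with high probability (a stand-in for the deterministic constructions the paper requires; sizes are illustrative).

        # l_inf/l_1 point queries from an incoherent sketch y = A x.
        import numpy as np

        rng = np.random.default_rng(3)
        n, m = 2000, 400
        A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # unit-norm columns
        eps = np.abs(A.T @ A - np.eye(n)).max()                # empirical coherence

        x = np.zeros(n)
        x[:5] = [10, -8, 6, 4, 2]          # a few heavy hitters
        y = A @ x                          # the linear sketch

        x_hat = A.T @ y                    # <a_i, A x> estimates x_i for all i
        err = np.abs(x_hat - x).max()
        print(f"coherence {eps:.3f}: max error {err:.3f} <= {eps * np.abs(x).sum():.3f}")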

    The road to deterministic matrices with the restricted isometry property

    The restricted isometry property (RIP) is a well-known matrix condition that provides state-of-the-art reconstruction guarantees for compressed sensing. While random matrices are known to satisfy this property with high probability, deterministic constructions have found less success. In this paper, we consider various techniques for demonstrating RIP deterministically, some popular and some novel, and we evaluate their performance. In evaluating some techniques, we apply random matrix theory and inadvertently find a simple alternative proof that certain random matrices are RIP. Later, we propose a particular class of matrices as candidates for being RIP, namely, equiangular tight frames (ETFs). Using the known correspondence between real ETFs and strongly regular graphs, we investigate certain combinatorial implications of a real ETF being RIP. Specifically, we give probabilistic intuition for a new bound on the clique number of Paley graphs of prime order, and we conjecture that the corresponding ETFs are RIP in a manner similar to random matrices. Comment: 24 pages.
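
    Part of what makes deterministic RIP certification hard is that the definition quantifies over every k-subset of columns; the brute-force check below makes that combinatorial cost explicit on a small random matrix (sizes are illustrative, and this exhaustive search is exactly the bottleneck deterministic constructions try to sidestep).

        # Brute-force estimate of the RIP constant delta_k: every k-column
        # submatrix must have all squared singular values within delta of 1.
        import numpy as np
        from itertools import combinations
        from math import comb

        rng = np.random.default_rng(4)
        m, n, k = 20, 30, 3
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # random: RIP w.h.p.

        delta = 0.0
        for S in combinations(range(n), k):
            sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
            delta = max(delta, abs(sv[0]**2 - 1), abs(sv[-1]**2 - 1))
        print(f"delta_{k} ~ {delta:.3f} after checking {comb(n, k)} subsets")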